Tutorial on Variational Autoencoders
Author
Abstract
In just three years, Variational Autoencoders (VAEs) have emerged as one of the most popular approaches to unsupervised learning of complicated distributions. VAEs are appealing because they are built on top of standard function approximators (neural networks), and can be trained with stochastic gradient descent. VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits [1, 2], faces [1, 3, 4], house numbers [5, 6], CIFAR images [6], physical models of scenes [4], segmentation [7], and predicting the future from static images [8]. This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior. No prior knowledge of variational Bayesian methods is assumed.
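The abstract above notes that VAEs are built from standard neural-network components and trained with stochastic gradient descent; concretely, training maximizes the evidence lower bound (ELBO), made differentiable via the reparameterization trick. The following is a minimal numpy sketch of a single-sample ELBO estimate for a diagonal-Gaussian encoder; the function names (`elbo_gaussian`, `decode`) are illustrative, not from the tutorial itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_gaussian(x, mu, log_var, decode):
    """One-sample Monte Carlo estimate of the ELBO.

    mu, log_var : encoder outputs defining q(z|x) = N(mu, diag(exp(log_var)))
    decode      : maps z to the mean of a unit-variance Gaussian p(x|z)
    """
    # Reparameterization trick: z = mu + sigma * eps keeps the sample
    # differentiable with respect to the encoder parameters.
    eps = rng.standard_normal(np.shape(mu))
    z = mu + np.exp(0.5 * log_var) * eps

    # Reconstruction term: log p(x|z) for a unit-variance Gaussian,
    # up to an additive constant.
    x_hat = decode(z)
    recon = -0.5 * np.sum((x - x_hat) ** 2)

    # KL(q(z|x) || N(0, I)) has a closed form for diagonal Gaussians.
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
    return recon - kl
```

In a real implementation the encoder and decoder are neural networks and this estimate is averaged over a minibatch, then maximized with SGD; the closed-form KL term is what makes the gradient low-variance compared to a naive Monte Carlo estimate.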
Similar resources
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference mo...
Challenges with Variational Autoencoders for Text
We study variational autoencoders for text data to build a generative model that can be used to conditionally generate text. We introduce a mutual information criterion to encourage the model to put semantic information into the latent representation, and compare its efficacy with other tricks explored in literature such as KL divergence cost annealing and word dropout. We compare the log-likel...
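The snippet above mentions KL divergence cost annealing as one trick for keeping semantic information in the latent code. A common form is a linear warm-up of the KL coefficient; this is a hedged sketch of that schedule (the names `kl_weight` and `annealed_loss`, and the default of 10,000 warm-up steps, are illustrative assumptions, not from the paper).

```python
def kl_weight(step, warmup_steps=10_000):
    """Linearly anneal the KL coefficient from 0 to 1 over warmup_steps.

    Starting near 0 lets the decoder learn to use the latent code before
    the posterior is pulled toward the prior, which helps mitigate
    posterior collapse in text VAEs.
    """
    return min(1.0, step / warmup_steps)

def annealed_loss(recon_loss, kl, step, warmup_steps=10_000):
    # Total loss: reconstruction plus a gradually increasing KL penalty.
    return recon_loss + kl_weight(step, warmup_steps) * kl
```

Other schedules (sigmoid or cyclical annealing) follow the same pattern: only the shape of `kl_weight` changes.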
How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks
Variational autoencoders are a powerful framework for unsupervised learning. However, previous work has been restricted to shallow models with one or two layers of fully factorized stochastic latent variables, limiting the flexibility of the latent representation. We propose three advances in training algorithms for variational autoencoders, making it possible for the first time to train deep models of up...
Debiasing Evidence Approximations: on Importance-weighted Autoencoders and Jackknife Variational Inference
The importance-weighted autoencoder (IWAE) approach of Burda et al. (2015) defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. (2017) reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspec...
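The IWAE bound described above is obtained by averaging K importance weights inside the logarithm rather than outside it. A minimal numerically stable sketch (the function name `iwae_bound` is illustrative):

```python
import numpy as np

def iwae_bound(log_w):
    """IWAE evidence bound from K importance weights.

    log_w : array of K values log[ p(x, z_k) / q(z_k | x) ] with z_k ~ q(z|x).
    Returns log( (1/K) * sum_k w_k ), computed stably via the
    log-sum-exp trick to avoid overflow in exp().
    """
    log_w = np.asarray(log_w, dtype=float)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```

With K = 1 this reduces to the ordinary single-sample ELBO estimate, and by Jensen's inequality the bound is never below the mean of the log-weights; in expectation it tightens monotonically as K grows, which is the sequence of bounds Burda et al. analyze.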
Variational Composite Autoencoders
Learning in latent variable models is challenging in the presence of complex data structure or intractable latent variables. Previous variational autoencoders can be ineffective due to their straightforward encoder-decoder structure. In this paper, we propose a variational composite autoencoder that sidesteps this issue by amortizing on top of the hierarchical latent variable model. The ...
Journal: CoRR
Volume: abs/1606.05908
Pages: -
Published: 2016